
    Fault Tolerant Deep Reinforcement Learning for Aerospace Applications

    With the growing use of Unmanned Aerial Systems, a new need has arisen for intelligent algorithms that not only stabilize and control the system but also account for factors such as optimality, robustness, adaptability, tracking, and decision making. In this thesis, a deep-learning-based control system is designed with fault-tolerant and disturbance-rejection capabilities and applied to a high-order nonlinear dynamic system. The approach uses a Reinforcement Learning architecture that combines concepts from optimal control, robust control, and game theory to create an optimally adaptive controller for disturbance rejection. Additionally, a cascaded Observer-based Kalman Filter is formulated for estimating adverse inputs to the system. Numerical simulations are presented using different nonlinear model dynamics and scenarios. The Deep Reinforcement Learning and Observer architecture is demonstrated to be a promising control system alternative for fault-tolerant applications.
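
    The thesis abstract does not give the observer equations, but one common way to estimate an unknown "adverse input" with a Kalman filter is to augment the state with a disturbance term modeled as a slowly varying bias. The sketch below illustrates that idea on a simple double-integrator plant; all matrices, dimensions, noise levels, and the variable names (A_a, B_a, C_a, kf_step) are assumptions chosen for the example, not details taken from the thesis.

```python
# Minimal illustrative sketch: augmented-state Kalman filter that estimates an
# unknown additive input d_k acting through the control channel. All numerical
# values below are assumptions for demonstration, not values from the thesis.
import numpy as np

# Discrete-time double-integrator plant: x = [position, velocity]
dt = 0.1
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.5 * dt**2],
              [dt]])
C = np.array([[1.0, 0.0]])          # only position is measured

# Augmented model: x_a = [x; d], with d modeled as a random-walk bias
A_a = np.block([[A, B],
                [np.zeros((1, 2)), np.eye(1)]])
B_a = np.vstack([B, np.zeros((1, 1))])
C_a = np.hstack([C, np.zeros((1, 1))])

Q = np.diag([1e-4, 1e-4, 1e-2])     # process noise (larger on d so it can drift)
R = np.array([[1e-2]])              # measurement noise

def kf_step(x_hat, P, u, y):
    """One predict/update cycle of the augmented-state Kalman filter."""
    # Predict
    x_pred = A_a @ x_hat + B_a @ u
    P_pred = A_a @ P @ A_a.T + Q
    # Update
    S = C_a @ P_pred @ C_a.T + R
    K = P_pred @ C_a.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - C_a @ x_pred)
    P_new = (np.eye(3) - K @ C_a) @ P_pred
    return x_new, P_new

# Simulate a constant unknown disturbance entering through the input channel
rng = np.random.default_rng(0)
x_true = np.zeros((2, 1))
d_true = 0.5                         # the "adverse input" to be estimated
x_hat = np.zeros((3, 1))
P = np.eye(3)

for k in range(200):
    u = np.array([[0.0]])            # control held at zero for clarity
    x_true = A @ x_true + B @ (u + d_true) + rng.normal(0, 1e-3, (2, 1))
    y = C @ x_true + rng.normal(0, 1e-1, (1, 1))
    x_hat, P = kf_step(x_hat, P, u, y)

print(f"estimated adverse input: {x_hat[2, 0]:.3f} (true value {d_true})")
```

    In a cascaded arrangement such as the one described in the abstract, an estimate like x_hat[2] could be fed to the learning-based controller so that the estimated disturbance is compensated rather than left for the policy to absorb; the exact interconnection used in the thesis is not specified here.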